Finding meaningful concepts in engineering application datasets, which allow for a sensible grouping of designs, is highly useful in many contexts. It enables the identification of different groups of designs with similar properties and provides useful knowledge for the engineering decision-making process. Furthermore, it opens a route toward specific design candidates that exhibit certain characteristic features. In this work, an approach is presented to define meaningful and consistent concepts in an existing engineering dataset. The designs in the dataset are characterized by a multitude of features, such as design parameters, geometrical properties, or performance values of the design under various boundary conditions. In the proposed approach, the complete feature set is partitioned into several subsets called description spaces. The definition of the concepts respects this partitioning, which leads to several desirable properties of the identified concepts that cannot be achieved by state-of-the-art clustering or concept identification approaches. A novel concept quality measure is proposed, which provides an objective value for a given definition of concepts in a dataset. The usefulness of the measure is demonstrated on a realistic engineering dataset consisting of about 2500 airfoil profiles, for which the performance values (lift and drag) under three different operating conditions were obtained by computational fluid dynamics simulations. A numerical optimization procedure is employed that maximizes the concept quality measure and finds meaningful concepts for different setups of the description spaces, while also incorporating user preferences. It is demonstrated how these concepts can be used to select archetypal representatives of the dataset that exhibit the characteristic features of each concept.
Large, pre-trained neural networks consisting of self-attention layers (Transformers) have recently achieved state-of-the-art results on several speech emotion recognition (SER) datasets. These models are typically pre-trained in a self-supervised manner to improve automatic speech recognition performance, and thus to understand linguistic information. In this work, we investigate the extent to which this information is exploited during SER fine-tuning. Using a reproducible methodology based on open-source tools, we synthesize prosodically neutral speech utterances while varying the sentiment of the text. Valence predictions of the Transformer model are highly reactive to positive and negative sentiment content, as well as to negations, but not to intensifiers or reducers, while none of these linguistic features impact arousal or dominance. These findings show that Transformers can successfully leverage linguistic information to improve their valence predictions, and that linguistic analysis should be included in their testing.
Solving robot navigation tasks via reinforcement learning (RL) is challenging due to their sparse rewards and long decision horizons. However, in many navigation tasks, high-level (HL) task representations, such as a rough floor plan, are available. Previous work has demonstrated efficient learning with hierarchical approaches that combine path planning in the HL representation with sub-goals derived from the plan to guide the RL policy in the source task. However, these approaches usually neglect the complex dynamics of the robot and its sub-optimal sub-goal-reaching capabilities during planning. This work overcomes these limitations by proposing a novel hierarchical framework that utilizes a trainable planning policy for the HL representation. Thereby, collected rollout data can be leveraged to learn about robot capabilities and environment conditions. We specifically introduce a planning policy based on value iteration with a learned transition model (VI-RL). In simulated robot navigation tasks, VI-RL yields consistent, strong improvements over vanilla RL; it is on par with vanilla hierarchical RL on single layouts but more broadly applicable to multiple layouts, and it is on par with trainable HL path-planning baselines except on a parking task with difficult non-holonomic dynamics, where it shows marked improvements.
We address the challenge of building domain-specific knowledge models for industrial use cases, where labelled data and taxonomic information are initially scarce. Our focus is on inductive link prediction models as a basis for practical tools that support knowledge engineers with exploring text collections and discovering and linking new (so-called open-world) entities to the knowledge graph. We argue that - though neural approaches to text mining have yielded impressive results in the past years - current benchmarks do not properly reflect the typical challenges encountered in the industrial wild. Therefore, our first contribution is an open benchmark coined IRT2 (inductive reasoning with text) that (1) covers knowledge graphs of varying sizes (including very small ones), (2) comes with incidental, low-quality text mentions, and (3) includes not only triple completion but also ranking, which is relevant for supporting experts with discovery tasks. We investigate two neural models for inductive link prediction, one based on end-to-end learning and one that learns from the knowledge graph and text data in separate steps. These models compete with a strong bag-of-words baseline. The results show a significant advance in performance for the neural approaches as soon as the available graph data decreases for linking. For ranking, the results are promising, and the neural approaches outperform the sparse retriever by a wide margin.
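To make the ranking task concrete: a sparse, bag-of-words baseline like the one mentioned above scores open-world text mentions against the descriptions of known entities. The following minimal sketch (not the IRT2 baseline itself; the function names and toy data are illustrative assumptions) ranks entities by cosine similarity between term-count vectors:

```python
from collections import Counter
import math


def rank_entities(mention, entity_texts):
    """Rank known entities against a text mention by cosine similarity
    of simple term-count (bag-of-words) vectors."""

    def vec(text):
        # Whitespace tokenization and raw term counts; a real retriever
        # would use TF-IDF weighting and proper tokenization.
        return Counter(text.lower().split())

    def cosine(u, v):
        dot = sum(u[t] * v[t] for t in u)
        nu = math.sqrt(sum(c * c for c in u.values()))
        nv = math.sqrt(sum(c * c for c in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    q = vec(mention)
    scored = [(name, cosine(q, vec(text))) for name, text in entity_texts.items()]
    return sorted(scored, key=lambda pair: -pair[1])


ranking = rank_entities(
    "gas turbine blade",
    {
        "turbine": "gas turbine blade cooling",
        "pump": "centrifugal pump impeller",
    },
)
```

The best-matching entity appears first; in a discovery workflow, a knowledge engineer would inspect the top of such a ranking to decide whether the mention links to an existing node or constitutes a new entity.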
Machine learning models are typically evaluated by computing similarity with reference annotations and trained by maximizing similarity with such. Especially in the bio-medical domain, annotations are subjective and suffer from low inter- and intra-rater reliability. Since annotations only reflect the annotation entity's interpretation of the real world, this can lead to sub-optimal predictions even though the model achieves high similarity scores. Here, the theoretical concept of Peak Ground Truth (PGT) is introduced. PGT marks the point beyond which an increase in similarity with the reference annotation stops translating to better Real World Model Performance (RWMP). Additionally, a quantitative technique to approximate PGT by computing inter- and intra-rater reliability is proposed. Finally, three categories of PGT-aware strategies to evaluate and improve model performance are reviewed.
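The inter- and intra-rater reliability used above to approximate PGT is commonly quantified with chance-corrected agreement statistics. As a hedged illustration (the abstract does not specify which statistic is used; Cohen's kappa is shown here as one standard choice), agreement between two annotators' label sequences can be computed as:

```python
from collections import Counter


def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: observed agreement between two raters,
    corrected for the agreement expected by chance."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Fraction of items on which the raters agree.
    observed = sum(x == y for x, y in zip(labels_a, labels_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(
        counts_a[label] * counts_b[label]
        for label in set(counts_a) | set(counts_b)
    ) / (n * n)
    return (observed - expected) / (1 - expected)
```

Low kappa between raters (inter-rater) or between repeated annotations by the same rater (intra-rater) signals that chasing ever-higher similarity with any single reference annotation may no longer improve real-world model performance.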
Efficient surrogate modelling is a key requirement for uncertainty quantification in data-driven scenarios. In this work, a novel approach using Sparse Random Features for surrogate modelling, in combination with self-supervised dimensionality reduction, is described. The method is compared to other methods on synthetic and real data obtained from crashworthiness analyses. The results show the superiority of the approach described here over state-of-the-art surrogate modelling techniques, namely Polynomial Chaos Expansions and Neural Networks.
In recent years, distributional reinforcement learning has produced many state-of-the-art results. Increasingly sample-efficient distributional algorithms for the discrete action domain have been developed over time, varying primarily in how they parameterize their approximations of value distributions and how they quantify the differences between those distributions. In this work, we transfer three of the most well-known and successful of those algorithms (QR-DQN, IQN and FQF) to the continuous action domain by extending two powerful actor-critic algorithms (TD3 and SAC) with distributional critics. We investigate whether the relative performance of the methods in the discrete action space translates to the continuous case. To that end, we compare them empirically on the pybullet implementations of a set of continuous control tasks. Our results indicate qualitative invariance regarding the number and placement of distributional atoms in the deterministic, continuous action setting.
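The quantile-based critics named above (QR-DQN and its successors) are trained with an asymmetric Huber loss over a set of quantile atoms rather than a scalar TD error. As a minimal sketch of that core idea (a scalar, single-sample version; the actual algorithms operate on batches of TD targets per atom), the quantile Huber loss can be written as:

```python
def quantile_huber_loss(pred_quantiles, target, taus, kappa=1.0):
    """Quantile Huber loss for fitting a set of quantile estimates of a
    return distribution, as used in QR-DQN-style distributional critics.

    pred_quantiles : current quantile estimates (one per atom)
    target         : a sampled TD target value
    taus           : quantile fractions in (0, 1), one per atom
    kappa          : Huber threshold smoothing the loss near zero error
    """
    total = 0.0
    for quantile, tau in zip(pred_quantiles, taus):
        u = target - quantile  # TD error for this quantile atom
        # Huber loss: quadratic near zero, linear beyond kappa.
        huber = 0.5 * u * u if abs(u) <= kappa else kappa * (abs(u) - 0.5 * kappa)
        # Asymmetric weight tilts each atom toward its quantile fraction.
        total += abs(tau - (1.0 if u < 0 else 0.0)) * huber
    return total / len(pred_quantiles)
```

In the actor-critic extensions described in the abstract, a loss of this form replaces the mean-squared TD error of the TD3 or SAC critic, so the critic learns a distribution of returns instead of only its expectation.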
Artificial Intelligence (AI) has become commonplace to solve routine everyday tasks. Because of the exponential growth in medical imaging data volume and complexity, the workload on radiologists is steadily increasing. We project that the gap between the number of imaging exams and the number of expert radiologist readers required to cover this increase will continue to expand, consequently introducing a demand for AI-based tools that improve the efficiency with which radiologists can comfortably interpret these exams. AI has been shown to improve efficiency in medical-image generation, processing, and interpretation, and a variety of such AI models have been developed across research labs worldwide. However, very few of these, if any, find their way into routine clinical use, a discrepancy that reflects the divide between AI research and successful AI translation. To address the barrier to clinical deployment, we have formed MONAI Consortium, an open-source community which is building standards for AI deployment in healthcare institutions, and developing tools and infrastructure to facilitate their implementation. This report represents several years of weekly discussions and hands-on problem solving experience by groups of industry experts and clinicians in the MONAI Consortium. We identify barriers between AI-model development in research labs and subsequent clinical deployment and propose solutions. Our report provides guidance on processes which take an imaging AI model from development to clinical implementation in a healthcare institution. We discuss various AI integration points in a clinical Radiology workflow. We also present a taxonomy of Radiology AI use-cases. Through this report, we intend to educate the stakeholders in healthcare and AI (AI researchers, radiologists, imaging informaticists, and regulators) about cross-disciplinary challenges and possible solutions.
Understanding our brain is one of the most daunting tasks, one we cannot expect to complete without the use of technology. MindBigData aims to provide a comprehensive and updated dataset of brain signals related to a diverse set of human activities, so it can inspire the use of machine learning algorithms as a benchmark of 'decoding' performance from raw brain activities into the corresponding (labelled) mental or physical tasks, using commercial off-the-shelf EEG devices or custom ones built by us to explore the limits of the technology. We describe the data collection procedures for each of the sub-datasets and for every headset used to capture them. We also report possible applications in the field of Brain-Computer Interfaces (BCI) that could impact the lives of billions, in almost every sector, with game-changing use cases in healthcare, industry, or entertainment, to name a few. Ultimately, why not use our brains directly to 'disintermediate' the senses, as the final HCI (Human-Computer Interaction) device - simply what we call the journey from Type to Touch to Talk to Think?
Modern mobile burst photography pipelines capture and merge a short sequence of frames to recover an enhanced image, but often disregard the 3D nature of the scene they capture, treating pixel motion between images as a 2D aggregation problem. We show that in a "long-burst", forty-two 12-megapixel RAW frames captured in a two-second sequence, there is enough parallax information from natural hand tremor alone to recover high-quality scene depth. To this end, we devise a test-time optimization approach that fits a neural RGB-D representation to long-burst data and simultaneously estimates scene depth and camera motion. Our plane plus depth model is trained end-to-end, and performs coarse-to-fine refinement by controlling which multi-resolution volume features the network has access to at what time during training. We validate the method experimentally, and demonstrate geometrically accurate depth reconstructions with no additional hardware or separate data pre-processing and pose-estimation steps.